Chapter 7 Drone Remote Pilot: Camera Model#

  1. Introduction: Camera Model

  2. Simulation: Lens Simulation

  3. Simulation: Stereo Depth from Parallax

  4. Simulation: Stereo Depth with OpenCV

  5. Simulation: Camera Calibration

  6. Self-Assessment

1. Introduction#

2. Simulation#

๐Ÿ‡ Interactive Lens Simulation in Python#

This code simulates the visual effects of different camera lens settings on an image (a rabbit photo), using interactive sliders with ipywidgets.


🔧 What It Does:#

  • Loads and resizes a sample image (rabbit.jpg)

  • Provides sliders to adjust:

    • ISO sensitivity

    • Focal length

    • Shutter speed

    • Distance to object

  • Applies blur based on focal length and distance

  • Adjusts brightness based on ISO and shutter speed

  • Updates image in real-time when sliders change


๐ŸŽ›๏ธ How It Works:#

  1. Widgets: Sliders created using ipywidgets.IntSlider and FloatSlider

  2. Image Manipulation: Performed using PIL.ImageFilter and ImageEnhance

  3. Interactivity: .observe() links sliders to simulate_image() function

  4. Display: Final image shown in a matplotlib plot with no axis for clarity
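The image manipulation in step 2 boils down to two simple heuristics, shown here as standalone functions using the same formulas as the widget code below: blur grows with focal length and subject distance (capped at a radius of 5), and exposure scales with ISO × shutter time (capped at 2.0).

```python
def blur_radius(focal_mm, distance_m):
    # Longer focal length and further distance = more blur, capped at 5
    return min(5, (focal_mm / 100) * (distance_m / 2))

def exposure_factor(iso, shutter_s):
    # Higher ISO and slower shutter = brighter image, capped at 2.0
    return min(2.0, (iso / 400) * shutter_s)

blur_radius(50, 2.0)        # 0.5 at the slider defaults
exposure_factor(6400, 1/8)  # 2.0 (hits the cap)
```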


๐Ÿ–ผ๏ธ Use Case:#

Great for educational demos to show how photographic parameters impact image quality โ€” especially in teaching exposure and depth of field concepts.

import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
from PIL import Image, ImageFilter, ImageEnhance
import numpy as np

๐Ÿ‡ Load your rabbit image#

img = Image.open(โ€œrabbit.jpgโ€).convert(โ€œRGBโ€) img = img.resize((300, 300))

๐ŸŽš๏ธ Sliders for lens parameters#

iso_slider = widgets.IntSlider(value=100, min=100, max=6400, step=100, description='ISO:')
focal_slider = widgets.IntSlider(value=50, min=18, max=300, step=5, description='Focal Length (mm):')
shutter_slider = widgets.FloatSlider(value=1/125, min=1/4000, max=1, step=0.01, description='Shutter Speed (s):')
distance_slider = widgets.FloatSlider(value=2.0, min=0.5, max=10.0, step=0.1, description='Distance to Object (m):')
output = widgets.Output()

🎨 Simulation function#

def simulate_image(change=None):
    output.clear_output()
    ISO = iso_slider.value
    focal = focal_slider.value
    shutter = shutter_slider.value
    distance = distance_slider.value

    # ➕ Simulate blur: longer focal length and further distance = more blur
    blur_radius = min(5, (focal / 100) * (distance / 2))
    blurred = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))

    # 🔅 Simulate brightness: ISO and shutter speed affect exposure
    exposure_factor = min(2.0, (ISO / 400) * shutter)
    enhancer = ImageEnhance.Brightness(blurred)
    final_image = enhancer.enhance(exposure_factor)

    with output:
        plt.figure(figsize=(4, 4))
        plt.imshow(final_image)
        plt.axis('off')
        plt.title(f"ISO:{ISO} | Focal:{focal}mm | Shutter:{shutter:.3f}s | Distance:{distance}m")
        plt.show()

🔄 Interactivity#

for slider in [iso_slider, focal_slider, shutter_slider, distance_slider]:
    slider.observe(simulate_image, names='value')

🚀 Display the interface#

display(widgets.VBox([iso_slider, focal_slider, shutter_slider, distance_slider, output]))
simulate_image()

3. Simulation#

๐Ÿ” Interactive Stereo Depth Estimation Using Image Parallax#

This Python widget simulates stereo vision depth perception from a single image using parallax cropping.


🧠 Core Concept#

It visualizes how camera baseline, focal length, and image disparity affect depth estimation:

\[ Z = \frac{f \cdot B}{d} \]

Where:

  • \( Z \): Estimated depth (meters)

  • \( f \): Focal length (pixels)

  • \( B \): Baseline (meters)

  • \( d \): Disparity (pixels)
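Plugging in the default slider values from the widget code below (f = 800 px, B = 0.1 m, d = 10 px) gives Z = 800 · 0.1 / 10 = 8 m:

```python
def estimate_depth(focal_px, baseline_m, disparity_px):
    """Estimated depth Z = f * B / d (meters)."""
    return (focal_px * baseline_m) / disparity_px

estimate_depth(800, 0.1, 10)   # 8.0 m at the slider defaults
estimate_depth(800, 0.1, 100)  # 0.8 m: larger disparity means a closer object
```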


๐ŸŽ›๏ธ Inputs (Interactive Sliders)#

Parameter

Type

Range

Description

Camera Baseline

FloatSlider

0.01 โ€“ 1.0 m

Distance between stereo cameras

Focal Length

FloatSlider

100 โ€“ 2000 px

Camera lens focal length

Disparity

IntSlider

1 โ€“ 100 px

Horizontal pixel shift between views


๐Ÿ“ Visual Output#

  • Side-by-side display of two cropped views simulating left/right camera images.

  • Title shows real-time estimated object distance.

  • Updates instantly when sliders are changed.


💡 Interpretation Tips#

  • 📉 Larger disparity → object is closer

  • 📏 Larger baseline or focal length → more accurate depth

  • 🧪 Great for visualizing stereo geometry in drones or vision systems


Use this widget to experiment with stereo depth principles; it is ideal for UAV vision calibration or for teaching computer vision fundamentals.
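The second tip above can be made quantitative. Differentiating \( Z = \frac{f \cdot B}{d} \) with respect to \( d \) gives the standard stereo depth-resolution estimate \( \Delta Z \approx \frac{Z^2}{f B} \, \Delta d \): at a fixed depth, doubling the baseline or the focal length halves the depth error caused by a one-pixel disparity error. A quick check with the default slider values:

```python
def depth_error_at(depth_m, focal_px, baseline_m, disparity_err_px=1.0):
    # dZ ≈ Z^2 / (f * B) * dd  (from differentiating Z = f*B/d)
    return depth_m**2 / (focal_px * baseline_m) * disparity_err_px

depth_error_at(8.0, 800, 0.1)  # 0.8 m error at 8 m depth, 1 px disparity error
depth_error_at(8.0, 800, 0.2)  # 0.4 m: doubling the baseline halves the error
```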

import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
from PIL import Image
import numpy as np

# ๐Ÿ‡ Load rabbit image
img = Image.open("rabbit.jpg").convert("RGB").resize((300, 300))

# ๐ŸŽ›๏ธ Stereo baseline and disparity sliders
baseline_slider = widgets.FloatSlider(value=0.1, min=0.01, max=1.0, step=0.01, description='Camera Baseline (m):')
focal_slider = widgets.FloatSlider(value=800, min=100, max=2000, step=50, description='Focal Length (pixels):')
disparity_slider = widgets.IntSlider(value=10, min=1, max=100, step=1, description='Disparity (pixels):')
output = widgets.Output()

# ๐Ÿ“ Parallax Distance Estimation Function
def update_estimate(change=None):
    output.clear_output()
    baseline = baseline_slider.value
    focal_length = focal_slider.value
    disparity = disparity_slider.value

    # Estimate depth: Z = (f * B) / d
    depth = (focal_length * baseline) / disparity

    # Simulate parallax by shifting image
    shift_pixels = disparity
    img_shifted = np.array(img)
    left_view = img_shifted[:, :-shift_pixels]
    right_view = img_shifted[:, shift_pixels:]

    with output:
        fig, axs = plt.subplots(1, 2, figsize=(8, 4))
        axs[0].imshow(left_view)
        axs[0].set_title("Left Camera View")
        axs[0].axis('off')
        axs[1].imshow(right_view)
        axs[1].set_title("Right Camera View")
        axs[1].axis('off')
        plt.suptitle(f"Estimated Distance to Object ≈ {depth:.2f} meters", fontsize=14)
        plt.tight_layout()
        plt.show()

# 🔄 Attach interactivity
for slider in [baseline_slider, focal_slider, disparity_slider]:
    slider.observe(update_estimate, names='value')

# 🚀 Display UI
display(widgets.VBox([baseline_slider, focal_slider, disparity_slider, output]))
update_estimate()

4. Simulation#

🎯 Stereo Depth Estimation with OpenCV and IPython Widgets#

This code creates an interactive tool for estimating depth using stereo images and block matching:


💡 What It Does#

  • Loads grayscale stereo images (left.jpeg and right.jpeg)

  • Uses OpenCV's StereoBM algorithm to compute a disparity map

  • Calculates depth for each pixel using:

    \[ \text{Depth} = \frac{\text{Focal Length} \cdot \text{Baseline}}{\text{Disparity}} \]
  • Displays:

    • Left and right views

    • Computed disparity map

    • Estimated depth map

  • Allows interactive adjustment of:

    • Camera Baseline (distance between cameras)

    • Focal Length (in pixels)

    • Block Size (used for stereo matching window)


๐Ÿ–ฅ๏ธ Ideal Use Case#

Visualizing depth from stereo imagery and testing stereo block matching parameters for UAV or robotics applications.
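Before experimenting with the matcher settings, note that `cv2.StereoBM_create` rejects some parameter combinations: per the OpenCV documentation, `numDisparities` must be a positive multiple of 16, and `blockSize` must be odd and within the range 5–255. A small standalone pre-flight check (a helper written for this chapter, not part of OpenCV):

```python
def valid_stereobm_params(num_disparities, block_size):
    """Check the StereoBM constraints: numDisparities a positive multiple
    of 16; blockSize odd and within [5, 255]."""
    return (num_disparities > 0 and num_disparities % 16 == 0
            and block_size % 2 == 1 and 5 <= block_size <= 255)

valid_stereobm_params(64, 15)  # True: the values used in the code below
valid_stereobm_params(60, 15)  # False: 60 is not a multiple of 16
```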

import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
import cv2
import numpy as np

# 📷 Load stereo images
img_left = cv2.imread("left.jpeg", cv2.IMREAD_GRAYSCALE)
img_right = cv2.imread("right.jpeg", cv2.IMREAD_GRAYSCALE)

# ✅ Validate image load
if img_left is None or img_right is None:
    raise FileNotFoundError("โŒ One or both images could not be loaded. Check the file paths.")

# ๐Ÿ“ Estimate block size heuristically
def estimate_block_size(img_left, img_right):
    height, width = img_left.shape
    min_dim = min(height, width)

    # Heuristic: set block size between 5 and 41 based on image dimensions
    estimated = int(min_dim * 0.025)
    if estimated % 2 == 0:
        estimated += 1
    return max(5, min(estimated, 41))

# ๐ŸŽ›๏ธ Slider setup (block size removed)
baseline_slider = widgets.FloatSlider(value=0.1, min=0.01, max=100.0, step=0.01, description='Baseline (m):')
focal_slider = widgets.FloatSlider(value=800, min=1, max=200, step=50, description='Focal Length (px):')
output = widgets.Output()

# 🧮 Interactive update function
def update_disparity(change=None):
    output.clear_output()
    baseline = baseline_slider.value
    focal = focal_slider.value
    block_size = estimate_block_size(img_left, img_right)

    # StereoBM computation
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=block_size)
    disparity = stereo.compute(img_left, img_right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1  # prevent divide by zero

    # Depth calculation
    depth_map = (focal * baseline) / disparity

    with output:
        fig, axs = plt.subplots(2, 2, figsize=(10, 8))

        axs[0, 0].imshow(img_left, cmap='gray')
        axs[0, 0].set_title("Left View")
        axs[0, 0].axis('off')

        axs[0, 1].imshow(img_right, cmap='gray')
        axs[0, 1].set_title("Right View")
        axs[0, 1].axis('off')

        disp_img = axs[1, 0].imshow(disparity, cmap='plasma')
        axs[1, 0].set_title(f"Disparity Map\n(Block Size: {block_size})")
        axs[1, 0].axis('off')
        fig.colorbar(disp_img, ax=axs[1, 0], fraction=0.046, pad=0.04, label='Disparity Value')

        depth_img = axs[1, 1].imshow(depth_map, cmap='viridis')
        axs[1, 1].set_title("Estimated Depth")
        axs[1, 1].axis('off')
        fig.colorbar(depth_img, ax=axs[1, 1], fraction=0.046, pad=0.04, label='Depth (m)')


        plt.suptitle("Stereo Depth Estimation", fontsize=14)
        plt.tight_layout()
        plt.show()

# 🔄 Hook sliders
for s in [baseline_slider, focal_slider]:
    s.observe(update_disparity, names='value')

# 🚀 Launch interface
display(widgets.VBox([baseline_slider, focal_slider, output]))
update_disparity()

5. Simulation#

🎯 Camera Calibration Using a Chessboard Pattern in OpenCV#

This code performs intrinsic camera calibration by detecting a known checkerboard pattern in multiple images.


๐Ÿ“ What It Does#

  • Defines a checkerboard grid size for calibration

  • Prepares 3D object points (real-world coordinates)

  • Detects corresponding 2D image points from calibration images

  • Refines corner positions using sub-pixel accuracy

  • Estimates camera parameters using cv2.calibrateCamera()


📋 Outputs#

  • Camera Matrix: Contains intrinsic parameters like focal length and optical center

  • Distortion Coefficients: Lens distortion values

  • Rotation & Translation Vectors: Pose information per calibration image


✅ Use Case#

Essential for correcting lens distortion and enabling accurate 3D reconstruction, particularly useful in UAV vision, robotics, and photogrammetry.

import cv2
import numpy as np
import glob

# 🧮 Checkerboard dimensions (number of inner corners per row and column)
CHECKERBOARD = (9, 6)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# 📌 Prepare object points like (0,0,0), (1,0,0), ..., (8,5,0)
objp = np.zeros((CHECKERBOARD[0]*CHECKERBOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

objpoints = []  # 3D points in real world space
imgpoints = []  # 2D points in image plane

# ๐Ÿ–ผ๏ธ Load your calibration images
images = glob.glob('calibration_images/*.jpeg')

image_shape = None  # Will hold the image size for cv2.calibrateCamera

for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    ret, corners = cv2.findChessboardCorners(gray, CHECKERBOARD, None)
    if ret:
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)

        image_shape = gray.shape[::-1]  # Save the shape for calibration

        cv2.drawChessboardCorners(img, CHECKERBOARD, corners2, ret)
        cv2.imshow('Corners', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

# Make sure shape was successfully captured
if image_shape is None:
    raise ValueError("No valid calibration image found. Make sure the pattern was detected.")

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, image_shape, None, None)



# 📋 Display results
print("Camera Matrix (Intrinsic parameters):\n", mtx)
print("\nDistortion Coefficients:\n", dist)
print("\nRotation Vector (first image):\n", rvecs[0])
print("\nTranslation Vector (first image):\n", tvecs[0])
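To see what the distortion coefficients mean, recall that the first two, \( k_1 \) and \( k_2 \), describe radial distortion of normalized image coordinates: \( x' = x (1 + k_1 r^2 + k_2 r^4) \), with \( r^2 = x^2 + y^2 \) (tangential terms omitted here). A minimal pure-Python sketch, using made-up coefficient values; real values come from the `dist` array printed above:

```python
def radial_distort(x, y, k1, k2):
    # Radial part of OpenCV's distortion model (tangential terms p1, p2 omitted)
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# Hypothetical coefficients for illustration only
xd, yd = radial_distort(0.5, 0.5, k1=-0.2, k2=0.05)  # (0.45625, 0.45625)
```

With a negative \( k_1 \), points far from the optical center are pulled inward, which is the familiar barrel-distortion pattern of wide-angle drone lenses.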